Description
People living in modern, digitized societies have to process an unprecedented amount of online information on a daily basis. Crucially, they often actively participate in the creation and spread of this information through the online social networks they belong to. Although these changes in how information is distributed have a democratizing potential, taking power away from media companies and governments and empowering a large number of formerly passive consumers, they also bring serious challenges. Our research project asks the overarching question: what can make online users immune to the spread of misinformation and bad ideas? We call this hypothetical resilience to online misinformation “webimmunity.” To answer this question, the project combines perspectives from epidemiology, psychology, philosophy and computer science. Its main goal is to develop tools that can reliably assess the immunity of individual actors and entire networks of online users to misinformation. Moreover, it aims to test how this webimmunization influences behavior online and, ultimately, how it can be increased. Importantly, the project also has a clear focus on ethical dilemmas related to the concept of webimmunization. For instance, what constitutes misinformation is not always clear-cut or agreed upon, especially where ethical and political issues are concerned, and whether one should aim to immunize individuals and networks at all touches on questions of individual autonomy and agency. In sum, our project aims to provide novel empirical insights into the mechanisms and processes leading to and resulting from webimmunity, including its ethical challenges. We expect this knowledge to inspire critical future research and to help address some of the most pressing societal issues of our time.
Summary of project results
The main challenge addressed by the project was the widespread dissemination of misinformation on social networks, which poses significant risks to public health, democratic processes, and social cohesion. The project sought to explore innovative ways to counter misinformation using principles from epidemiology combined with machine learning techniques. By conceptualizing misinformation as an infectious phenomenon, we aimed to identify patterns in its spread, understand the factors contributing to its propagation, and develop scalable solutions to mitigate its impact online.
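To illustrate what the epidemiological framing can look like in practice, the sketch below runs a simple SIR-style process (susceptible, infected/sharing, recovered/immune) on a random contact network. It is a minimal, hypothetical example under assumed parameters (share_prob, recover_prob, a random-graph contact structure), not the project's actual model or code.

```python
# Minimal, illustrative SIR-style simulation of misinformation spread on a
# random contact network. Hypothetical sketch of the epidemiological framing;
# all parameters and the network generator are assumptions, not project code.
import random

def simulate_spread(n_users=1000, avg_contacts=8, share_prob=0.05,
                    recover_prob=0.1, n_seeds=5, steps=50, seed=42):
    rng = random.Random(seed)

    # Build a random undirected contact network (Erdos-Renyi style).
    p_edge = avg_contacts / (n_users - 1)
    contacts = [[] for _ in range(n_users)]
    for i in range(n_users):
        for j in range(i + 1, n_users):
            if rng.random() < p_edge:
                contacts[i].append(j)
                contacts[j].append(i)

    # States: 'S' = susceptible, 'I' = infected (sharing), 'R' = recovered/immune.
    state = ['S'] * n_users
    for i in rng.sample(range(n_users), n_seeds):
        state[i] = 'I'

    history = []
    for _ in range(steps):
        newly_infected, newly_recovered = [], []
        for i in range(n_users):
            if state[i] == 'I':
                # Each sharing user may pass the item on to susceptible contacts...
                for j in contacts[i]:
                    if state[j] == 'S' and rng.random() < share_prob:
                        newly_infected.append(j)
                # ...and may stop sharing (become "immune") with some probability.
                if rng.random() < recover_prob:
                    newly_recovered.append(i)
        for j in newly_infected:
            state[j] = 'I'
        for i in newly_recovered:
            state[i] = 'R'
        history.append((state.count('S'), state.count('I'), state.count('R')))
    return history

if __name__ == "__main__":
    for t, (s, i, r) in enumerate(simulate_spread()):
        if t % 10 == 0:
            print(f"step {t:2d}: susceptible={s:4d} sharing={i:4d} immune={r:4d}")
```

In a model of this kind, interventions such as pre-emptive debunking or bias-awareness prompts can be explored simply by lowering share_prob or moving some users to the immune state before the spreading process starts.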
The project conducted a series of experiments to uncover the psychological and social factors correlated with the spread and consumption of misinformation online. These experiments combined self-reported psychological data with behavioral data collected from social network activity, providing a robust, multi-dimensional analysis of misinformation dynamics. Additionally, we designed and tested practical interventions to reduce susceptibility to misinformation, including strategies to enhance critical thinking. Finally, the project developed an ethics framework for the use of artificial intelligence in online social science research, ensuring responsible deployment of AI tools in this field.
The project delivered significant outcomes for multiple beneficiary groups and has the potential to impact everyday reality:
- Impact on social media design: The project tested an intervention (confirmation bias awareness) suitable for deployment across multiple social media platforms, particularly when users browse political or health-related content.
- Impact on education and development of future interventions: The project's socio-psychological studies provide valuable insights for educators and developers of educational materials targeting misinformation. These findings can inform the design of messaging tailored to specific groups based on the socio-cognitive and personality profiles of individuals who endorse conspiracy theories. Moreover, one of our studies offers insights into non-content-related aspects of science communication that are crucial for individuals susceptible to misinformation.
Summary of bilateral results
The bilateral collaboration proved to be successful. The partnering research teams had a unique opportunity to work across various sites and disciplines. Two joint funding applications were submitted. Beyond the measurable scientific outcomes of this collaboration, a significant added value from the project was the appointment of young scientists (Ph.D. students) at participating institutions. These scholars benefited from being part of an international and transdisciplinary research environment, which enabled them to develop new perspectives, access unique networking opportunities, and enhance their academic mobility.